

Supplementary Materials for NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning

Neural Information Processing Systems

Figure (right): Normalized attention scores processed by two different normalization methods.

Table 1: Performance of searched architectures using different NAS algorithms in the DARTS [7] space on CIFAR-10 [5]. The inference latency below was measured on a machine with a GeForce RTX 3090 GPU, with batch size set to 1.

Model          Encode (ms)  Infer (ms)  Total (ms)
NAR-Former     2.4784       17.4864     19.9648
NAR-Former V2  2.3722       5.2276      7.5998

Due to the softmax, Eq. (5) focuses almost all attention on the current node, whereas Eq. (2) restricts attention to connected nodes by introducing the adjacency matrix.
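The notes above mention an Eq. (2) that restricts attention to connected nodes via the adjacency matrix. As a minimal sketch of such adjacency-masked attention, assuming a standard scaled dot-product formulation (the paper's exact Eq. (2) may differ in details such as normalization):

```python
import numpy as np

def masked_attention(Q, K, V, adj):
    """Scaled dot-product attention restricted to connected nodes.
    Scores between unconnected node pairs are set to -inf before the
    softmax, so each node attends only to its graph neighbours.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # raw (n, n) attention scores
    scores = np.where(adj > 0, scores, -np.inf)   # mask out non-edges
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# 3-node chain graph 0-1-2 (self-loops included): node 0 never sees node 2.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = masked_attention(Q, K, V, adj)
```

Because the mask zeroes the weight between unconnected nodes, changing the features of node 2 leaves node 0's output untouched.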


From Scaling to Structured Expressivity: Rethinking Transformers for CTR Prediction

Yan, Bencheng, Lei, Yuejie, Zeng, Zhiyuan, Wang, Di, Lin, Kaiyi, Wang, Pengjie, Xu, Jian, Zheng, Bo

arXiv.org Artificial Intelligence

Despite massive investments in scale, deep models for click-through rate (CTR) prediction often exhibit rapidly diminishing returns, a stark contrast to the smooth, predictable gains seen in large language models. We identify the root cause as a structural misalignment: Transformers assume sequential compositionality, while CTR data demand combinatorial reasoning over high-cardinality semantic fields. Unstructured attention spreads capacity indiscriminately, amplifying noise under extreme sparsity and breaking scalable learning. To restore alignment, we introduce the Field-Aware Transformer (FAT), which embeds field-based interaction priors into attention through decomposed content alignment and cross-field modulation. This design ensures model complexity scales with the number of fields F, not the total vocabulary size n >> F, leading to tighter generalization and, critically, observed power-law scaling in AUC as model width increases. We present the first formal scaling law for CTR models, grounded in Rademacher complexity, that explains and predicts this behavior. On large-scale benchmarks, FAT improves AUC by up to +0.51% over state-of-the-art methods. Deployed online, it delivers +2.33% CTR and +0.66% RPM. Our work establishes that effective scaling in recommendation arises not from size, but from structured expressivity: architectural coherence with data semantics.
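The abstract does not give FAT's equations, so the following is only a hypothetical illustration of how field-level priors can keep structural parameters at O(F^2) instead of O(n^2): ordinary content alignment plus a learned per-field-pair bias B. The function name, the additive form of the bias, and all variable names are assumptions, not the paper's actual formulation:

```python
import numpy as np

def field_aware_attention(X, field_ids, B):
    """Hypothetical field-aware attention sketch (not FAT's actual equations):
    content alignment plus a learned per-field-pair bias B of shape (F, F),
    so the structural parameters grow with the number of fields F rather
    than with the vocabulary size n. Returns attended features and weights.
    """
    d = X.shape[-1]
    logits = X @ X.T / np.sqrt(d)                      # content alignment
    logits = logits + B[np.ix_(field_ids, field_ids)]  # cross-field modulation
    logits -= logits.max(axis=-1, keepdims=True)       # stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ X, w

# 4 tokens drawn from F = 3 fields; B[0, 1] encodes a made-up prior that
# field 0 should attend more strongly to field 1.
rng = np.random.default_rng(0)
field_ids = np.array([0, 0, 1, 2])
B = np.zeros((3, 3))
B[0, 1] = 2.0
X = rng.standard_normal((4, 8))
out, w = field_aware_attention(X, field_ids, B)
```

The bias table has F x F entries regardless of how many items each field contains, which is the scaling property the abstract emphasizes.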



Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis

Neural Information Processing Systems

Histopathology Whole Slide Image (WSI) analysis serves as the gold standard for clinical cancer diagnosis in the daily routines of doctors. To develop computer-aided diagnosis models for histopathology WSIs, previous methods typically employ Multi-Instance Learning to enable slide-level prediction given only slide-level labels. Among these models, vanilla attention mechanisms without pairwise interactions have traditionally been employed but are unable to model contextual information. More recently, self-attention models have been utilized to address this issue. To alleviate the computational complexity of long sequences in large WSIs, methods like HIPT use region-slicing, and TransMIL employs Nyströmformer as an approximation of full self-attention. Both approaches suffer from suboptimal performance due to the loss of key information.


NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning

Neural Information Processing Systems

As more deep learning models are being applied in real-world applications, there is a growing need for modeling and learning the representations of neural networks themselves. An effective representation can be used to predict target attributes of networks without the need for actual training and deployment procedures, facilitating efficient network design and deployment. Recently, inspired by the success of Transformer, some Transformer-based representation learning frameworks have been proposed and achieved promising performance in handling cell-structured models. However, graph neural network (GNN) based approaches still dominate the field of learning representation for the entire network. In this paper, we revisit the Transformer and compare it with GNN to analyze their different architectural characteristics. We then propose a modified Transformer-based universal neural network representation learning model NAR-Former V2.


You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection

Neural Information Processing Systems

Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS. Code and pre-trained models are available at https://github.com/hustvl/YOLOS.


CascadeXML: Rethinking Transformers for End-to-end Multi-resolution Training in Extreme Multi-label Classification

Neural Information Processing Systems

Extreme Multi-label Text Classification (XMC) involves learning a classifier that can assign an input with a subset of most relevant labels from millions of label choices. Recent approaches, such as XR-Transformer and LightXML, leverage a transformer instance to achieve state-of-the-art performance. However, in this process, these approaches need to make various trade-offs between performance and computational requirements. A major shortcoming, as compared to the Bi-LSTM based AttentionXML, is that they fail to keep separate feature representations for each resolution in a label tree. We thus propose CascadeXML, an end-to-end multi-resolution learning pipeline, which can harness the multi-layered architecture of a transformer model for attending to different label resolutions with separate feature representations.


Rethinking Transformers in Solving POMDPs

Lu, Chenhao, Shi, Ruizhe, Liu, Yuyao, Hu, Kaizhe, Du, Simon S., Xu, Huazhe

arXiv.org Artificial Intelligence

Sequential decision-making algorithms such as reinforcement learning (RL) in real-world scenarios inevitably face environments with partial observability. This paper scrutinizes the effectiveness of a popular architecture, namely Transformers, in Partially Observable Markov Decision Processes (POMDPs) and reveals its theoretical limitations. We establish that regular languages, which Transformers struggle to model, are reducible to POMDPs. This poses a significant challenge for Transformers in learning POMDP-specific inductive biases, due to their lack of inherent recurrence found in other models like RNNs. This paper casts doubt on the prevalent belief in Transformers as sequence models for RL and proposes to introduce a point-wise recurrent structure. The Deep Linear Recurrent Unit (LRU) emerges as a well-suited alternative for Partially Observable RL, with empirical results highlighting the sub-optimal performance of the Transformer and considerable strength of LRU.
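A minimal sketch of the diagonal linear recurrence underlying an LRU-style layer, real-valued for simplicity (the actual Deep LRU uses complex diagonal dynamics, exponential parameterization, and normalization; this only illustrates the point-wise recurrent structure the paper advocates):

```python
import numpy as np

def linear_recurrent_unit(x, lam, B, C):
    """Minimal real-valued sketch of a diagonal linear recurrent unit:
        h_t = lam * h_{t-1} + B @ x_t,   y_t = C @ h_t.
    The element-wise (point-wise) recurrence carries state across time,
    i.e. the explicit memory mechanism the paper argues Transformers lack
    when learning POMDP-specific inductive biases.
    """
    h = np.zeros(lam.shape[0])
    ys = []
    for x_t in x:
        h = lam * h + B @ x_t   # diagonal state update
        ys.append(C @ h)        # linear readout
    return np.stack(ys)

# With lam = 1 and identity B, C, the unit reduces to a running sum over
# the input sequence, i.e. perfect memory of the whole history.
x = np.arange(15, dtype=float).reshape(5, 3)
y = linear_recurrent_unit(x, np.ones(3), np.eye(3), np.eye(3))
```

Shrinking lam toward 0 makes the state forget old observations geometrically, which is how such units trade off memory length against stability.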


TSLANet: Rethinking Transformers for Time Series Representation Learning

Eldele, Emadeldeen, Ragab, Mohamed, Chen, Zhenghua, Wu, Min, Li, Xiaoli

arXiv.org Machine Learning

Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications. While Transformer-based models excel at capturing long-range dependencies, they face limitations in noise sensitivity, computational efficiency, and overfitting with smaller datasets. In response, we introduce a novel Time Series Lightweight Adaptive Network (TSLANet), as a universal convolutional model for diverse time series tasks. Specifically, we propose an Adaptive Spectral Block, harnessing Fourier analysis to enhance feature representation and to capture both long-term and short-term interactions while mitigating noise via adaptive thresholding. Additionally, we introduce an Interactive Convolution Block and leverage self-supervised learning to refine the capacity of TSLANet for decoding complex temporal patterns and improve its robustness on different datasets. Our comprehensive experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection, showcasing its resilience and adaptability across a spectrum of noise levels and data sizes. The code is available at https://github.com/emadeldeen24/TSLANet.
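As a rough illustration of the Fourier-thresholding idea behind the Adaptive Spectral Block (TSLANet learns the threshold adaptively; here it is a fixed, hand-set relative cutoff, so this is a sketch rather than the paper's actual block):

```python
import numpy as np

def adaptive_spectral_filter(x, rel_threshold=0.1):
    """Sketch of Fourier-domain denoising in the spirit of the Adaptive
    Spectral Block: move to the frequency domain, drop coefficients whose
    magnitude falls below a cutoff relative to the dominant component,
    and transform back. The fixed rel_threshold stands in for the
    learned adaptive threshold described in the abstract.
    """
    spec = np.fft.rfft(x)
    mag = np.abs(spec)
    mask = mag >= rel_threshold * mag.max()   # keep only strong frequencies
    return np.fft.irfft(spec * mask, n=len(x))

# A pure tone with an integer number of periods passes through unchanged,
# while weak broadband noise falls below the cutoff and is removed.
n = 64
t = np.arange(n)
clean = np.cos(2 * np.pi * 4 * t / n)
recovered = adaptive_spectral_filter(clean)
```

Because the mask is computed per input from its own spectrum, the same filter adapts to signals whose dominant frequencies differ, which is the property the learned version exploits.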